For years we heard that FAI is a very complex problem, with hundreds of solutions suggested and none of them proved. But then Musk said: let's give AI to everybody, and now everybody seems to think that this is a magical solution to all our problems. It may be. But I want it to be proved rigorously. Unfortunately, it seems that OpenAI was created not to search for a solution, but to prove that one particular solution is the best. That would lead to rationalizing.
I don't think that is the best approach to safety.
It reminds me of the story of the LHC. After the decision to build it was made, a committee was created to prove its safety. Not surprisingly, the committee found that it is safe (they made some unproved assumptions to do so; see Adrian Kent's article on the topic).
I really want to see a technical proof that OpenAI is safe, and I want to find that this proof was produced before the decision was made.